CASTOR status and evolution
In January 1999, CERN began to develop CASTOR ("CERN Advanced STORage
manager"). This Hierarchical Storage Manager, targeted at HEP applications, has
been in full production at CERN since May 2001. It now holds more than two
petabytes of data in roughly 9 million files. In 2002, 350 terabytes of data
were stored for COMPASS at 45 MB/s, and a Data Challenge was run for ALICE in
preparation for the LHC startup in 2007, sustaining a data transfer rate to
tape of 300 MB/s for one week (180 TB). The major functionality improvements
were support for files larger than 2 GB (in collaboration with IN2P3) and the
development of Grid interfaces to CASTOR: GridFTP and SRM ("Storage Resource
Manager"). An ongoing effort is under way to copy the existing data from
obsolete media such as 9940A tapes to more cost-effective offerings. CASTOR has
also been deployed at several HEP sites with little effort. In 2003, we plan to
continue working on Grid interfaces and to improve performance not only for
Central Data Recording but also for Data Analysis applications, where thousands
of processes may access the same hot data. This could imply the selection of
another filesystem or the use of replication (hardware or software).
Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics
conference (CHEP03), La Jolla, CA, USA, March 2003; 2 pages, PDF. PSN TUDT00
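The 2 GB limit mentioned above is the classic barrier imposed by signed 32-bit file offsets in older POSIX I/O interfaces; large-file support widens the offset to 64 bits. A minimal sketch of the arithmetic (illustrative only, not CASTOR code):

```python
# A signed 32-bit file offset can address bytes 0 .. 2**31 - 1,
# i.e. files just under 2 GiB in size.
MAX_32BIT_OFFSET = 2**31 - 1
print(MAX_32BIT_OFFSET)  # 2147483647 bytes, just under 2 GiB

# With 64-bit offsets (large-file support), the addressable size
# grows to 2**63 - 1 bytes, on the order of 8 EiB.
MAX_64BIT_OFFSET = 2**63 - 1
print(MAX_64BIT_OFFSET // 2**40)  # size expressed in whole TiB
```

This is why files larger than 2 GB required coordinated changes across the storage stack rather than a simple configuration switch.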
Status Report of the DPHEP Study Group: Towards a Global Effort for Sustainable Data Preservation in High Energy Physics
Data from high-energy physics (HEP) experiments are collected with
significant financial and human effort and are mostly unique. An
inter-experimental study group on HEP data preservation and long-term analysis
was convened as a panel of the International Committee for Future Accelerators
(ICFA). The group was formed by large collider-based experiments and
investigated the technical and organisational aspects of HEP data preservation.
An intermediate report was released in November 2009 addressing the general
issues of data preservation in HEP. This paper includes and extends the
intermediate report. It provides an analysis of the research case for data
preservation and a detailed description of the various projects at experiment,
laboratory and international levels. In addition, the paper provides a concrete
proposal for an international organisation in charge of data management and
policies in high-energy physics.
Towards automation of computing fabrics using tools from the fabric management workpackage of the EU DataGrid project
This article describes the architecture behind the designed fabric management system and the status of the different developments. It also covers the experience with an existing tool for automated configuration and installation that has been adapted and used from the beginning to manage the EU DataGrid testbed, which is now used for LHC data challenges.
Data Preservation in High Energy Physics
Data from high-energy physics (HEP) experiments are collected with significant financial and human effort and are mostly unique. At the same time, HEP has no coherent strategy for data preservation and re-use. An inter-experimental Study Group on HEP data preservation and long-term analysis was convened at the end of 2008 and held two workshops, at DESY (January 2009) and SLAC (May 2009). This document is an intermediate report to the International Committee for Future Accelerators (ICFA) presenting the reflections of this Study Group.